Remote state estimation of large-scale distributed dynamic processes plays an important role in Industry 4.0 applications. In this paper, we focus on the transmission scheduling problem of a remote estimation system. First, we derive structural properties of the optimal sensor scheduling policy over fading channels. Then, building on these theoretical guidelines, we develop a structure-enhanced deep reinforcement learning (DRL) framework for optimal scheduling of the system that achieves the minimum overall estimation mean-square error (MSE). In particular, we propose a structure-enhanced action selection method, which tends to select actions that obey the policy structure; this explores the action space more effectively and enhances the learning efficiency of DRL agents. Furthermore, we introduce a structure-enhanced loss function that penalizes actions that do not follow the policy structure. The new loss function guides the DRL agent to converge to the optimal policy structure quickly. Our numerical experiments illustrate that the proposed structure-enhanced DRL algorithms reduce training time by 50% and the remote estimation MSE by 10% to 25% compared to benchmark DRL algorithms. In addition, we show that the derived structural properties hold for a wide range of dynamic scheduling problems beyond remote state estimation.
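The structure-enhanced action selection described above can be sketched as follows; this is a minimal illustration assuming a threshold-type policy in which, all else being equal, the sensor with the largest age of information should be scheduled. The function name, the epsilon-greedy framing, and the age-based exploration rule are illustrative assumptions, not the paper's actual implementation:

```python
import random

def structure_enhanced_action(q_values, ages, epsilon=0.1, rng=random):
    """Pick a sensor to schedule, biasing exploration toward
    structure-obeying actions.

    q_values : per-sensor Q-value estimates (larger = better to schedule)
    ages     : per-sensor age of information; a threshold-type optimal
               policy tends to schedule the stalest sensor.
    With probability epsilon we explore, but instead of drawing a uniform
    random action we pick the structure-consistent one (largest age);
    otherwise we exploit the learned Q-values.
    """
    if rng.random() < epsilon:
        # structure-consistent exploration: schedule the stalest sensor
        return max(range(len(ages)), key=ages.__getitem__)
    # greedy exploitation of the learned Q-values
    return max(range(len(q_values)), key=q_values.__getitem__)
```

Compared with uniform random exploration, this keeps exploratory actions inside the region of the action space where the optimal policy is known to live, which is the intuition behind the reported training-time savings.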
Many real-world applications of language models (LMs), such as code autocomplete and writing assistance, involve human-LM interaction, but the main LM benchmarks are non-interactive, where a system produces output without human intervention. To evaluate human-LM interaction, we develop a framework, Human-AI Language-based Interaction Evaluation (H-LINE), that expands non-interactive evaluation along three dimensions, capturing (i) the interactive process, not only the final output; (ii) the first-person subjective experience, not just a third-party assessment; and (iii) notions of preference beyond quality. We then design five tasks ranging from goal-oriented to open-ended to capture different forms of interaction. On four state-of-the-art LMs (three variants of OpenAI's GPT-3 and AI21's J1-Jumbo), we find that non-interactive performance does not always result in better human-LM interaction and that first-person and third-party metrics can diverge, suggesting the importance of examining the nuances of human-LM interaction.
The findable, accessible, interoperable, and reusable (FAIR) data principles have provided a framework for examining, evaluating, and improving how we share data with the aim of facilitating scientific discovery. Efforts have been made to generalize these principles to research software and other digital products. Artificial intelligence (AI) models -- algorithms that have been trained on data rather than explicitly programmed -- are an important target for this because of the ever-increasing pace with which AI is transforming scientific and engineering domains. In this paper, we propose a practical definition of FAIR principles for AI models and create a FAIR AI project template that promotes adherence to these principles. We demonstrate how to implement these principles using a concrete example from experimental high energy physics: a graph neural network for identifying Higgs bosons decaying to bottom quarks. We study the robustness of these FAIR AI models and their portability across hardware architectures and software frameworks, and report new insights on the interpretability of AI predictions by studying the interplay between FAIR datasets and AI models. Enabled by publishing FAIR AI models, these studies pave the way toward reliable and automated AI-driven scientific discovery.
We study a novel and important communication pattern in large-scale model-parallel deep learning (DL), which we call cross-mesh resharding. This pattern emerges when the two paradigms of model parallelism - intra-operator and inter-operator parallelism - are combined to support large models on large clusters. In cross-mesh resharding, a sharded tensor needs to be sent from a source device mesh to a destination device mesh, on which the tensor may be distributed with the same or different layouts. We formalize this as a many-to-many multicast communication problem, and show that existing approaches either are sub-optimal or do not generalize to different network topologies or tensor layouts, which result from different model architectures and parallelism strategies. We then propose two contributions to address cross-mesh resharding: an efficient broadcast-based communication system, and an "overlapping-friendly" pipeline schedule. On microbenchmarks, our overall system outperforms existing ones by up to 10x across various tensor and mesh layouts. On end-to-end training of two large models, GPT-3 and U-Transformer, we improve throughput by 10% and 50%, respectively.
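The broadcast-based idea behind the communication system above can be illustrated with a toy plan builder: each tensor slice crosses the slow inter-mesh link exactly once, to a single entry device on the destination mesh, and is then broadcast over the fast intra-mesh links. This is a hypothetical sketch of the principle, not the paper's actual scheduler; all names and the round-robin choice of sender and entry device are assumptions:

```python
def resharding_plan(src_devices, dst_devices, num_slices):
    """Build a toy broadcast-based plan for moving num_slices tensor
    slices from a source device mesh to a destination device mesh.

    Each slice makes one inter-mesh hop from a source device to a
    distinct entry device (chosen round-robin), then is broadcast to
    the remaining destination devices over intra-mesh links.
    """
    plan = []
    for s in range(num_slices):
        sender = src_devices[s % len(src_devices)]
        entry = dst_devices[s % len(dst_devices)]
        plan.append({
            "slice": s,
            "cross_mesh": (sender, entry),  # the single inter-mesh hop
            "broadcast": [d for d in dst_devices if d != entry],
        })
    return plan
```

Spreading the entry devices round-robin keeps the inter-mesh link load balanced, which is what a naive all-to-all point-to-point scheme fails to do.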
The SNMMI Artificial Intelligence (SNMMI-AI) Summit, organized by the SNMMI AI Task Force, took place in Bethesda, MD on March 21-22, 2022. It brought together various community members and stakeholders from academia, healthcare, industry, patient representatives, and government (NIH, FDA), and considered various key themes to envision and facilitate a bright future for routine, trustworthy use of AI in nuclear medicine. In what follows, essential issues, challenges, controversies and findings emphasized in the meeting are summarized.
Needle-in-a-haystack problems arise in a wide range of applications, including rare disease prediction, ecological resource management, fraud detection, and material property optimization. A needle-in-a-haystack problem emerges when there is an extreme imbalance between the number of optimal conditions and the size of the dataset. For example, in the open-access Materials Project database, only 0.82% of the 146K total materials have a negative Poisson's ratio. However, current state-of-the-art optimization algorithms are not designed to find solutions to these challenging multi-dimensional needle-in-a-haystack problems, leading to slow convergence to the global optimum or pigeonholing into a local minimum. In this paper, we propose a Zooming Memory-Based Initialization algorithm, termed ZoMBI, which builds on conventional Bayesian optimization principles to quickly and efficiently optimize needle-in-a-haystack problems in fewer iterations and with fewer experiments by addressing the common convergence and pigeonholing issues. ZoMBI actively extracts knowledge from the previous best-performing evaluated experiments to iteratively zoom the sampling search bounds in on the global-optimum "needle", and then purges the memory of low-performing historical experiments to accelerate computation. We validate the algorithm's performance on two real-world 5-dimensional needle-in-a-haystack material property optimization tasks: the discovery of auxetic (negative Poisson's ratio) materials and the discovery of high-figure-of-merit thermoelectric materials. Compared to traditional Bayesian optimization, the ZoMBI algorithm demonstrates a compute-time speedup of up to 400x and efficiently discovers, in under 100 experiments, materials that are up to 3x more highly optimized than those discovered by current state-of-the-art algorithms.
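The zooming step described above can be sketched as follows; this is a deliberately simplified illustration of shrinking the search bounds around the best evaluations, and omits the memory-purging and Bayesian-acquisition machinery of the published algorithm (the function name and `top_k` parameter are assumptions):

```python
def zoom_bounds(evaluated, bounds, top_k=3):
    """Shrink per-dimension search bounds around the best evaluations.

    evaluated : list of (x, y) pairs, where x is a point (list of floats)
                and y is the objective value (lower is better).
    bounds    : list of (lo, hi) tuples, one per dimension.
    Returns tightened bounds spanning the top_k best points, clipped to
    the original bounds.
    """
    best = sorted(evaluated, key=lambda p: p[1])[:top_k]
    new_bounds = []
    for d, (lo, hi) in enumerate(bounds):
        coords = [x[d] for x, _ in best]
        new_bounds.append((max(lo, min(coords)), min(hi, max(coords))))
    return new_bounds
```

Iterating this step concentrates subsequent sampling in an ever-smaller region around the "needle", which is where the reported compute-time savings come from.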
Quantitative models for magnetic resonance imaging can be evaluated in in-silico tissue models. This includes the validation and sensitivity analysis of imaging biomarkers and tissue microstructure parameters. We propose a novel method for generating realistic numerical phantoms of myocardial microstructure. We extend previous studies by considering cardiomyocyte shape variability, water exchange between cardiomyocytes (intercalated discs), myocardial microstructure disarray, and four sheetlet orientations. In the first stage of the method, cardiomyocytes and sheetlets are generated by considering the shape variability and the intercalated discs in cardiomyocyte-to-cardiomyocyte connections. Sheetlets are then aggregated and oriented in the directions of interest. Our morphometric study demonstrates no significant difference ($p > 0.01$) between the distributions of the volume, length, and primary and secondary axes of the numerical and real (literature) cardiomyocyte data. Structural correlation analysis confirms that the in-silico tissue is in the same disarray class as the real tissue. Moreover, the absolute angle difference between the simulated helical angle (HA) of the cardiomyocytes and the input HA (reference value) ($4.3^\circ \pm 3.1^\circ$) is in good agreement with the absolute angle differences between HA measured using experimental cardiac diffusion tensor imaging (cDTI) and histology (reference value) reported by (Holmes et al., 2000) ($3.7^\circ \pm 6.4^\circ$) and (Scollan et al., 1998) ($4.9^\circ \pm 14.6^\circ$). The angular distance between the eigenvectors of the input and simulated cDTI is smaller than that measured between structure tensor imaging (the gold standard) and experimental cDTI. These results confirm that the proposed method can produce richer numerical phantoms of the myocardium than previous studies.
Imaging markers of cerebral small vessel disease provide valuable information on brain health, but their manual assessment is time-consuming and hampered by substantial intra- and inter-rater variability. Automated rating may benefit biomedical research as well as clinical assessment, but the diagnostic reliability of existing algorithms is unclear. Here, we present the results of the \textit{VAscular Lesions DetectiOn and Segmentation} (\textit{Where is VALDO?}) challenge, which was run as a satellite event of the International Conference on Medical Image Computing and Computer Aided Intervention (MICCAI) 2021. This challenge aimed to promote the development of methods for the automated detection and segmentation of small and sparse imaging markers of cerebral small vessel disease, namely enlarged perivascular spaces (EPVS) (Task 1), cerebral microbleeds (Task 2), and lacunes of presumed vascular origin (Task 3), while leveraging weak and noisy labels. Overall, 12 teams participated in the challenge, proposing solutions for one or more tasks (4 for Task 1 - EPVS, 9 for Task 2 - microbleeds, and 6 for Task 3 - lacunes). Multi-cohort data were used for both training and evaluation. Results showed a large variability in performance across teams and across tasks, with promising results notably for Task 1 - EPVS and Task 2 - microbleeds, but not yet practically usable results for Task 3 - lacunes. The challenge also highlighted performance inconsistencies that may preclude use at the individual level, while still proving useful at the population level.
Current metrics for multi-category multi-object tracking (MOT) use class labels to group tracking results for per-class evaluation. Similarly, MOT methods typically only associate objects with the same class predictions. These two prevalent strategies in MOT implicitly assume that classification performance is near-perfect. However, this is far from the case in recent large-scale MOT datasets, which contain large numbers of classes that are rare or semantically similar. The resulting incorrect classifications therefore lead to sub-optimal tracking and inadequate benchmarking of trackers. We address these issues by disentangling classification from tracking. We introduce a new metric, Track Every Thing Accuracy (TETA), which breaks tracking measurement into three sub-factors: localization, association, and classification, allowing comprehensive benchmarking of tracking performance even under inaccurate classification. TETA also deals with the challenging incomplete-annotation problem of large-scale tracking datasets. We further introduce a Track Every Thing tracker (TETer), which performs association using Class Exemplar Matching (CEM). Our experiments show that TETA evaluates trackers more comprehensively, and that TETer achieves significant improvements over the state of the art on the challenging large-scale datasets BDD100K and TAO.
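The decoupling idea above can be illustrated with a toy single-frame scorer: boxes are matched class-agnostically by IoU (localization), and classification is then scored only on the localized pairs. This is a hypothetical simplification to show the separation of sub-factors, not the published TETA metric, which additionally handles association across frames and incomplete annotations:

```python
def iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def decoupled_scores(preds, gts, iou_thr=0.5):
    """Class-agnostic localization matching, then classification scored
    only on localized pairs.

    preds, gts : lists of (box, class_label) pairs.
    Returns (localization recall, classification accuracy on matches).
    """
    matched, used = [], set()
    for p_box, p_cls in preds:
        for j, (g_box, g_cls) in enumerate(gts):
            if j not in used and iou(p_box, g_box) >= iou_thr:
                used.add(j)
                matched.append((p_cls, g_cls))
                break
    loc = len(matched) / max(len(gts), 1)
    cls = sum(p == g for p, g in matched) / len(matched) if matched else 0.0
    return loc, cls
```

Note how a tracker that localizes every object but mislabels half of them still receives full localization credit, which a class-grouped metric would deny it.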
The Majorana Demonstrator is a leading experiment searching for neutrinoless double-beta decay with high-purity germanium (HPGe) detectors. Machine learning provides a new way to maximize the amount of information provided by these detectors, but its data-driven nature makes it less interpretable than traditional analysis. An interpretability study reveals the machine's decision-making logic, allowing us to learn from the machine and feed insights back into the traditional analysis. In this work, we present the first machine learning analysis of Majorana Demonstrator data; this is also the first interpretable machine learning analysis of any germanium detector experiment. Two gradient-boosted decision tree models are trained to learn from the data, and a game-theory-based model interpretability study is conducted to understand the origin of the classification power. By learning from data, this analysis identifies correlations among reconstruction parameters to further enhance the background rejection performance. By learning from the machine, this analysis reveals the importance of new background categories that reciprocally benefit the standard Majorana analysis. The model is highly compatible with next-generation germanium detector experiments such as LEGEND, since it can be trained on a large number of detectors simultaneously.